A Method for the Runtime Validation of AI-based Environment Perception in Automated Driving System

Aslam, Iqra, Buragohain, Abhishek, Bamal, Daniel, Aniculaesei, Adina, Zhang, Meng, Rausch, Andreas

arXiv.org Artificial Intelligence

Environment perception is a fundamental part of the dynamic driving task executed by Autonomous Driving Systems (ADS). Artificial Intelligence (AI)-based approaches have prevailed over classical techniques for realizing environment perception. Current safety-relevant standards for automotive systems, International Organization for Standardization (ISO) 26262 and ISO 21448, assume the existence of comprehensive requirements specifications. These specifications serve as the basis on which the functionality of an automotive system can be rigorously tested and checked for compliance with safety regulations. However, AI-based perception systems do not have complete requirements specifications; instead, large datasets are used to train them. This paper presents a function monitor for the functional runtime monitoring of a two-fold AI-based environment perception for ADS, based on camera and LiDAR sensors, respectively. To evaluate the applicability of the function monitor, we conduct a qualitative scenario-based evaluation in a controlled laboratory environment using a model car. The evaluation results are then discussed to provide insights into the monitor's performance and its suitability for real-world applications.
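The abstract describes a function monitor that checks a two-fold (camera and LiDAR) perception at runtime. A minimal sketch of one plausible consistency check, cross-confirming detections between the two sensor channels, is shown below; the `Detection` type, the distance threshold, and the matching rule are illustrative assumptions, not the paper's actual method.

```python
# Hedged sketch: cross-check camera and LiDAR object detections for
# mutual confirmation. All names and thresholds are assumptions.
from dataclasses import dataclass
from math import hypot

@dataclass
class Detection:
    x: float  # position in the vehicle frame, metres
    y: float

def cross_check(camera, lidar, max_dist=1.0):
    """Return True when every camera detection is confirmed by some
    LiDAR detection within max_dist metres, and vice versa."""
    def confirmed(dets, refs):
        return all(any(hypot(d.x - r.x, d.y - r.y) <= max_dist
                       for r in refs)
                   for d in dets)
    return confirmed(camera, lidar) and confirmed(lidar, camera)

cam = [Detection(5.0, 0.2)]
lid = [Detection(5.3, 0.1)]
print(cross_check(cam, lid))  # sensors agree within 1 m -> True
```

A runtime monitor built this way flags a perception fault whenever the two channels disagree, without needing a complete requirements specification of the perception function itself.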


Connected Dependability Cage Approach for Safe Automated Driving

Aniculaesei, Adina, Aslam, Iqra, Bamal, Daniel, Helsch, Felix, Vorwald, Andreas, Zhang, Meng, Rausch, Andreas

arXiv.org Artificial Intelligence

Automated driving systems can be helpful in a wide range of societal challenges, e.g., mobility-on-demand and transportation logistics for last-mile delivery, by aiding the vehicle driver or taking over the responsibility for the dynamic driving task partially or completely. Ensuring the safety of automated driving systems is no trivial task, even more so for systems of SAE Level 3 or above. To achieve this, mechanisms are needed that can continuously monitor the system's operating conditions, also denoted as the system's operational design domain (ODD). This paper presents a safety concept for automated driving systems which uses a combination of onboard runtime monitoring via a connected dependability cage and off-board runtime monitoring via a remote command control center to continuously monitor the system's ODD. On the one hand, the connected dependability cage fulfills a double functionality: (1) to continuously monitor the operational design domain of the automated driving system, and (2) to transfer the responsibility in a smooth and safe manner between the automated driving system and the off-board remote safety driver, who is present in the remote command control center. On the other hand, the remote command control center enables the remote safety driver to monitor and take over control of the vehicle. We evaluate our safety concept for automated driving systems in a lab environment and on a test field track and report on results and lessons learned.
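The responsibility transfer described above can be pictured as a small state machine: the dependability cage requests a handover when the ODD is violated, and the remote safety driver accepts it. The sketch below is a minimal illustration under assumed state and event names; the paper's actual transfer protocol is richer than this.

```python
# Hedged sketch: a three-state machine for the onboard/off-board
# responsibility transfer. State and event names are assumptions.
AUTOMATED, HANDOVER_REQUESTED, REMOTE = "automated", "handover_requested", "remote"

def step(state, event):
    transitions = {
        (AUTOMATED, "odd_exit"): HANDOVER_REQUESTED,     # cage detects ODD violation
        (HANDOVER_REQUESTED, "driver_accepts"): REMOTE,  # remote safety driver takes over
        (REMOTE, "odd_reentry_confirmed"): AUTOMATED,    # responsibility handed back
    }
    return transitions.get((state, event), state)  # irrelevant events are ignored

state = AUTOMATED
for event in ["odd_exit", "driver_accepts"]:
    state = step(state, event)
print(state)  # -> remote
```

Modeling the transfer as explicit states makes the "smooth and safe" handover requirement checkable: the vehicle never leaves automated mode without a confirmed takeover.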


Towards exploring adversarial learning for anomaly detection in complex driving scenes

Habib, Nour, Cho, Yunsu, Buragohain, Abhishek, Rausch, Andreas

arXiv.org Artificial Intelligence

Many autonomous systems (ASs), such as autonomous driving cars, perform various safety-critical functions. Many of these autonomous systems take advantage of Artificial Intelligence (AI) techniques to perceive their environment. However, these perception components cannot be formally verified, since the accuracy of such AI-based components depends heavily on the quality of the training data. Machine learning (ML)-based anomaly detection, a technique to identify data that does not belong to the training distribution, can therefore be used as a safety-measuring indicator during the development and operational time of such AI-based components. Adversarial learning, a sub-field of machine learning, has proven its ability to detect anomalies in images and videos with impressive results on simple datasets. Therefore, in this work, we investigate and provide insight into the performance of such techniques on a highly complex driving scenes dataset called Berkeley DeepDrive.
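Adversarial-learning anomaly detection, as summarized above, typically reduces to scoring an input by how "in-distribution" a trained model judges it and thresholding that score. The fragment below sketches only that final decision step; the scoring function and threshold are stand-in assumptions, where a real system would use, e.g., a trained GAN discriminator.

```python
# Hedged sketch: flag anomalies by thresholding a discriminator-style
# score. The scores and threshold below are illustrative assumptions.
def is_anomalous(score, threshold=0.5):
    """Flag an input whose score falls below the threshold, i.e. one
    the model judges unlike the training distribution."""
    return score < threshold

# Assumed example scores for a familiar scene vs. a novel one.
scores = {"in_distribution": 0.91, "novel_scene": 0.12}
flags = {name: is_anomalous(s) for name, s in scores.items()}
print(flags)  # -> {'in_distribution': False, 'novel_scene': True}
```

Used at runtime, such a flag serves as the "safety-measuring indicator" the abstract mentions: inputs outside the training distribution are surfaced instead of silently fed to the perception component.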